
Keyword Search Result

[Keyword] neural networks (287 hits)

Results 181-200 of 287

  • A Cascade Neural Network for Blind Signal Extraction without Spurious Equilibria

    Ruck THAWONMAS  Andrzej CICHOCKI  Shun-ichi AMARI  

     
    PAPER-Neural Networks
    Vol: E81-A No:9  Page(s): 1833-1846

    We present a cascade neural network for blind source extraction. We propose a family of unconstrained optimization criteria, from which we derive a learning rule that can extract a single source signal from a linear mixture of source signals. To prevent the newly extracted source signal from being extracted again in the next processing unit, we propose another unconstrained optimization criterion that uses knowledge of this signal. From this criterion, we then derive a learning rule that deflates from the mixture the newly extracted signal. By virtue of blind extraction and deflation processing, the presented cascade neural network can cope with a practical case where the number of mixed signals is equal to or larger than the number of sources, with the number of sources not known in advance. We prove analytically that the proposed criteria both for blind extraction and deflation processing have no spurious equilibria. In addition, the proposed criteria do not require whitening of mixed signals. We also demonstrate the validity and performance of the presented neural network by computer simulation experiments.
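
    A minimal sketch of the extraction-then-deflation cascade described above, assuming a simple kurtosis-maximization update for the extraction unit and least-squares deflation; the paper's own unconstrained criteria (which also avoid whitening and spurious equilibria) are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def extract_one(x, iters=200, lr=0.01):
            """Extract a single source y = w^T x from the mixture x (m x T).
            Placeholder kurtosis-style rule; the paper derives its own criteria."""
            m, T = x.shape
            w = rng.standard_normal(m)
            for _ in range(iters):
                y = w @ x                                  # current output signal
                grad = (x * y**3).mean(axis=1) - 3 * y.var() * (x * y).mean(axis=1)
                w += lr * grad
                w /= np.linalg.norm(w)                     # keep the weight bounded
            return w @ x

        def deflate(x, y):
            """Remove the newly extracted signal y from every mixture channel by
            least-squares projection, so it cannot be extracted again downstream."""
            b = (x @ y) / (y @ y)                          # per-channel regression coeffs
            return x - np.outer(b, y)

        # Cascade: each processing unit extracts one source, then deflates.
        s = np.vstack([np.sign(np.sin(2 * np.pi * 0.013 * np.arange(2000))),
                       rng.laplace(size=2000)])
        x = rng.standard_normal((3, 2)) @ s                # 3 mixtures of 2 sources
        for unit in range(2):
            y = extract_one(x)
            x = deflate(x, y)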

  • Dynamical Neural Network Model for Hippocampal Memory

    Osamu ARAKI  Kazuyuki AIHARA  

     
    PAPER-Neural Networks
    Vol: E81-A No:9  Page(s): 1824-1832

    The hippocampus is thought to play an important role in the transformation of short-term memory into long-term memory, a process called consolidation. The physiological phenomenon of synaptic change called LTP or LTD has been studied as a basic mechanism for learning and memory; the neural network mechanism of consolidation, however, has not yet been clarified. The authors' approach is to construct an information-processing theory of learning and memory that can explain both physiological and behavioral data. This paper proposes a dynamical hippocampal model that can store and recall spatial input patterns. The authors assume that the primary functions of the hippocampus are to store episodic information from sensory signals and to keep it for a while until the neocortex stores it as long-term memory. On the basis of the hippocampal architecture and hypothetical synaptic dynamics of LTP/LTD, the authors construct a hippocampal model that incorporates: (1) divergent connections, (2) synaptic dynamics of LTP and LTD based on pre- and postsynaptic coincidence, and (3) propagation of LTD. Computer simulations show that this model can store and recall its spatial input pattern by self-organizing closed activating pathways. Through the backward propagation of LTD, the synaptic pathway for a specific spatial input pattern can be selected from among the divergent closed connections. In addition, the output pattern suggests that this model is sensitive to the temporal timing of input signals, which in turn suggests its applicability to spatio-temporal input patterns. Future extensions of the model are also discussed.
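
    The coincidence-based LTP/LTD update that the model builds on can be sketched as follows. This is an illustrative toy, not the authors' equations: the thresholds, learning rates, and omission of LTD propagation are all simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        N = 50                                   # number of units
        W = rng.uniform(0.0, 0.2, (N, N))        # divergent connections (post x pre)
        np.fill_diagonal(W, 0.0)

        def step(pre_spikes, W, theta=0.5, ltp=0.05, ltd=0.02):
            """One discrete step: units fire when summed input crosses a threshold;
            coincident pre/post activity gives LTP, pre-only activity gives LTD."""
            post_spikes = (W @ pre_spikes > theta).astype(float)
            coincident = np.outer(post_spikes, pre_spikes)       # pre AND post fired
            pre_only = np.outer(1.0 - post_spikes, pre_spikes)   # pre fired, post silent
            W += ltp * coincident - ltd * pre_only               # LTP / LTD
            np.clip(W, 0.0, 1.0, out=W)
            return post_spikes

        # Repeatedly present one spatial input pattern; an activating pathway
        # for that pattern self-organizes through the LTP/LTD updates.
        pattern = (rng.random(N) < 0.2).astype(float)
        spikes = pattern
        for t in range(100):
            spikes = step(np.maximum(spikes, pattern), W)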

  • A Flexible Learning Algorithm for Binary Neural Networks

    Atsushi YAMAMOTO  Toshimichi SAITO  

     
    PAPER-Neural Networks
    Vol: E81-A No:9  Page(s): 1925-1930

    This paper proposes a simple learning algorithm that can realize any Boolean function using three-layer binary neural networks. The algorithm has two flexible learning functions: 1) a moving "core" for separating the inputs, and 2) "don't care" settings for inputs that have already been separated; the "don't care" inputs do not affect subsequent separations. Through numerical simulations on some typical examples, we have verified that our algorithm requires fewer hidden-layer neurons than conventional algorithms.

  • Asynchronous Pulse Neural Network Model for VLSI Implementation

    Mitsuru HANAGATA  Yoshihiko HORIO  Kazuyuki AIHARA  

     
    PAPER-Neural Networks
    Vol: E81-A No:9  Page(s): 1853-1859

    An asynchronous pulse neural network model suitable for VLSI implementation is proposed. The model neuron can function as a coincidence detector as well as an integrator, depending on its internal time constant relative to the external one, and can show complex dynamical behavior including chaotic responses. A network of the proposed neurons can process spatio-temporally coded information through dynamical cell assemblies with functional synaptic connections.
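
    The integrator-versus-coincidence-detector behavior can be illustrated with a plain leaky integrator; the parameters below are illustrative and do not model the proposed circuit.

        import numpy as np

        def respond(pulse_times, tau, threshold=1.0, dt=0.1, t_end=20.0):
            """Leaky integrator driven by unit input pulses.
            Large tau (slow decay) -> temporal integration of dispersed pulses;
            small tau (fast decay) -> fires only on near-coincident pulses."""
            v, fired = 0.0, []
            pulses = set(round(t / dt) for t in pulse_times)
            for k in range(int(t_end / dt)):
                v *= np.exp(-dt / tau)            # membrane leak
                if k in pulses:
                    v += 0.6                      # input pulse increment
                if v >= threshold:
                    fired.append(k * dt)
                    v = 0.0                       # reset after output pulse
            return fired

        dispersed = [2.0, 6.0, 10.0, 14.0]
        coincident = [8.0, 8.2, 8.4]
        print(respond(dispersed, tau=50.0))   # integrator: dispersed pulses accumulate
        print(respond(coincident, tau=0.5))   # coincidence detector: fires here only
        print(respond(dispersed, tau=0.5))    # fast decay ignores dispersed pulses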

  • Spatial Resolution Improvement of a Low Spatial Resolution Thermal Infrared Image by Backpropagated Neural Networks

    Maria del Carmen VALDES  Minoru INAMURA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:8  Page(s): 872-880

    Recent progress in neural network research has demonstrated the usefulness of neural networks in a variety of areas. In this work, we describe their application to improving the spatial resolution of a remotely sensed low-resolution thermal infrared image, using the high-spatial-resolution visible and near-infrared images from the Landsat TM sensor. The same task is also performed by an algebraic method. The tests developed are explained, and examples of the results obtained in each test are shown and compared with one another. An error analysis is also carried out, and future improvements of these methods are evaluated.

  • On a Code-Excited Nonlinear Predictive Speech Coding (CENLP) by Means of Recurrent Neural Networks

    Ni MA  Tetsuo NISHI  Gang WEI  

     
    PAPER
    Vol: E81-A No:8  Page(s): 1628-1634

    To improve speech coding quality, in particular the prediction of long-term dependencies, we propose a new nonlinear predictor: a fully connected recurrent neural network (FCRNN) in which the hidden units receive feedback not only from themselves but also from the output unit. A comparison of the capabilities of the FCRNN with conventional predictors shows that the former has a smaller prediction error than the latter. We apply this FCRNN, instead of the previously proposed recurrent neural networks, in the code-excited predictive speech coding system (i.e., CELP) and show that our system requires a lower bit rate per frame and improves speech coding performance.
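
    A sketch of the FCRNN forward recursion under assumed dimensions; the output-to-hidden feedback term (w_fb * y) is the structural feature described above, while the sizes and tanh nonlinearity are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(2)

        class FCRNN:
            """Fully connected recurrent predictor: hidden units receive the input
            samples, their own previous activations, and the previous output."""
            def __init__(self, n_in, n_hid):
                self.W_in  = rng.standard_normal((n_hid, n_in)) * 0.3
                self.W_hh  = rng.standard_normal((n_hid, n_hid)) * 0.3
                self.w_out = rng.standard_normal(n_hid) * 0.3
                self.w_fb  = rng.standard_normal(n_hid) * 0.3   # output -> hidden
                self.h, self.y = np.zeros(n_hid), 0.0

            def step(self, x):
                z = self.W_in @ x + self.W_hh @ self.h + self.w_fb * self.y
                self.h = np.tanh(z)                 # hidden nonlinearity
                self.y = self.w_out @ self.h        # linear output: predicted x(n)
                return self.y

        net = FCRNN(n_in=4, n_hid=8)
        signal = np.sin(0.2 * np.arange(100))
        preds = [net.step(signal[t - 4:t]) for t in range(4, 100)]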

  • Kohonen Learning with a Mechanism, the Law of the Jungle, Capable of Dealing with Nonstationary Probability Distribution Functions

    Taira NAKAJIMA  Hiroyuki TAKIZAWA  Hiroaki KOBAYASHI  Tadao NAKAMURA  

     
    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E81-D No:6  Page(s): 584-591

    We present a mechanism, named the law of the jungle (LOJ), to improve Kohonen learning. The LOJ serves as an adaptive vector quantizer for approximating nonstationary probability distribution functions. In the LOJ mechanism, the probability that each node wins a competition is dynamically estimated during learning. Using the estimated win probabilities, "strong" nodes are reinforced by creating new nodes near them, and "weak" nodes are removed by deleting themselves. A creation and a deletion are paired and treated as an atomic operation. Therefore, nodes that cannot win the competition are transferred directly from regions where inputs almost never occur to regions where inputs occur often. This direct "jump" of weak nodes provides rapid convergence. Moreover, the LOJ requires neither time-decaying parameters nor a special periodic adaptation. For these reasons, the LOJ is suitable for quickly approximating nonstationary probability distribution functions. In experimental comparisons with several other Kohonen learning networks, only the LOJ could follow nonstationary probability distributions, except under high-noise conditions.
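
    The paired create/delete ("jump") operation can be sketched as below; the win-probability estimator and the jump test are illustrative placeholders, not the paper's rule.

        import numpy as np

        rng = np.random.default_rng(3)

        nodes = rng.random((16, 2))            # codebook vectors in input space
        win_p = np.full(16, 1.0 / 16)          # running estimate of win probability

        def loj_step(x, lr=0.05, beta=0.01):
            """One competition step with the law-of-the-jungle mechanism: update
            the winner, re-estimate win probabilities, then atomically delete the
            weakest node and create a copy next to the strongest."""
            global nodes, win_p
            winner = np.argmin(((nodes - x) ** 2).sum(axis=1))
            nodes[winner] += lr * (x - nodes[winner])            # Kohonen update
            win_p *= (1.0 - beta)                                # decaying estimate
            win_p[winner] += beta
            weak, strong = np.argmin(win_p), np.argmax(win_p)
            if win_p[strong] > 4 * win_p[weak]:                  # placeholder test
                # atomic create+delete: the weak node "jumps" next to the strong one
                nodes[weak] = nodes[strong] + 0.01 * rng.standard_normal(2)
                win_p[weak] = win_p[strong] = (win_p[weak] + win_p[strong]) / 2

        # Nonstationary input: the distribution's center drifts over time.
        for t in range(5000):
            center = np.array([np.cos(t / 800.0), np.sin(t / 800.0)])
            loj_step(center + 0.1 * rng.standard_normal(2))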

  • Conditional-Class-Entropy-Based Segmentation of Brain MR Images on a Neural Tree Classifier

    Iren VALOVA  Yusuke SUGANAMI  Yukio KOSUGI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:4  Page(s): 382-390

    Segmenting images obtained from magnetic resonance imaging (MRI) is an important process for visualizing human soft tissues, and MR applications often require a sound segmentation technique. Neural networks may provide superior solutions for the pattern classification of medical images compared with conventional methods. For image segmentation with neural networks of a reasonable size, it is important to select the most effective combination of secondary indices to be used for classification. In this paper, we introduce a vector-quantized conditional class entropy (VQCCE) criterion to evaluate which indices are effective for pattern classification, without testing them on actual classifiers. We have exploited a newly developed neural tree classifier to accomplish the segmentation task. This network effectively partitions the feature space into subregions, and each final subregion is assigned a class label according to the data routed to it. As the tree grows, the number of training data for each node decreases, which results in fewer weight-update epochs and lower time consumption. The partitioning of the feature space at each node is done by a simple neural network, whose appropriateness is measured by a newly proposed estimation criterion, the measure for assessment of a neuron (MAN). It facilitates obtaining a neuron with maximum correlation between the unit's value and the residual error at a given output, and applying this criterion guarantees that the best-fit neuron is adopted to split the feature space. The proposed neural classifier has achieved a 95% correct classification rate on average for the white/gray matter segmentation problem. Its performance is compared with that of a multilayer perceptron (MLP), a network widely exploited in image processing and pattern recognition. The experiments show the superiority of the introduced method in terms of the fewer iterations and weight updates needed to train the network, i.e., lower computational complexity, as well as a higher correct classification rate.

  • A Cascade Form Predictor of Neural and FIR Filters and Its Minimum Size Estimation Based on Nonlinearity Analysis of Time Series

    Ashraf A. M. KHALAF  Kenji NAKAYAMA  

     
    PAPER
    Vol: E81-A No:3  Page(s): 364-373

    Time series prediction is an important technology in a wide variety of fields. Actual time series contain both linear and nonlinear properties, and the amplitude of the time series to be predicted is usually a continuous value. For these reasons, we combine nonlinear and linear predictors in a cascade form. The nonlinear prediction problem is reduced to a pattern classification: a set of past samples x(n-1),...,x(n-N) is transformed into an output that is the prediction of the next sample x(n). We therefore employ a multilayer neural network with a sigmoidal hidden layer and a single linear output neuron for the nonlinear prediction, called the Nonlinear Sub-Predictor (NSP). The NSP is trained by a supervised learning algorithm using the sample x(n) as a target. However, it is rather difficult for the NSP to generate continuous amplitudes and to predict the linear properties, so we employ a linear predictor after the NSP: an FIR filter, called the Linear Sub-Predictor (LSP). The LSP is trained by a supervised learning algorithm, also using x(n) as a target. In order to estimate the minimum size of the proposed predictor, we analyze the nonlinearity of the time series of interest. Prediction is equivalent to mapping a set of past samples to the next sample, and the multilayer neural network is well suited to this kind of pattern mapping. Still, difficult mappings may exist when several sets of very similar patterns are mapped onto very different samples. The degree of difficulty of the mapping is closely related to the nonlinearity, and the number of past samples needed for prediction is determined by this nonlinearity: a difficult mapping requires a large number of past samples. Computer simulations using sunspot data and artificially generated discrete-amplitude data have demonstrated the efficiency of the proposed predictor and the nonlinearity analysis.
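
    A sketch of the NSP-LSP cascade under assumed sizes (N = 8 past samples, 6 hidden units, an 8-tap FIR), with plain gradient/LMS updates standing in for the supervised learning algorithms of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        N, H, M = 8, 6, 8          # past samples, hidden units, FIR taps (assumed)
        W1 = rng.standard_normal((H, N)) * 0.3   # NSP hidden weights
        w2 = rng.standard_normal(H) * 0.3        # NSP linear output neuron
        c  = np.zeros(M); c[0] = 1.0             # LSP: FIR filter on NSP outputs

        def nsp(past):                           # nonlinear sub-predictor
            return w2 @ np.tanh(W1 @ past)

        x = np.sin(0.3 * np.arange(600)) + 0.3 * np.sin(0.3 * np.arange(600)) ** 3
        # Step 1: train the NSP toward the target x(n) by gradient descent.
        for epoch in range(30):
            for n in range(N, len(x)):
                past = x[n - N:n][::-1]
                h = np.tanh(W1 @ past)
                e = (w2 @ h) - x[n]
                w2 -= 0.01 * e * h
                W1 -= 0.01 * e * np.outer(w2 * (1 - h ** 2), past)
        # Step 2: train the LSP (FIR, LMS) on the NSP output sequence,
        # also using x(n) as the target, to restore continuous amplitude.
        y_nsp = np.array([nsp(x[n - N:n][::-1]) for n in range(N, len(x))])
        for n in range(M - 1, len(y_nsp)):
            u = y_nsp[n - M + 1:n + 1][::-1]     # current and past NSP outputs
            c += 0.01 * (x[N + n] - c @ u) * u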

  • Training Data Selection Method for Generalization by Multilayer Neural Networks

    Kazuyuki HARA  Kenji NAKAYAMA  

     
    PAPER
    Vol: E81-A No:3  Page(s): 374-381

    A training data selection method is proposed for multilayer neural networks (MLNNs). This method selects a small number of training data that guarantee both generalization and fast training of MLNNs applied to pattern classification. Generalization is achieved by using data located close to the boundary between the pattern classes. However, if only these data are used in training, convergence is slow; this phenomenon is analyzed in this paper. Therefore, in the proposed method, the MLNN is first trained using a number of randomly selected data (Step 1). The data for which the output error is relatively large are then selected, and each is paired with the nearest data belonging to a different class. The newly selected data are further paired with their nearest data, so that finally, pairs of data located close to the boundary are found. Using these pairs, the MLNN is further trained (Step 2). Since there are several ways to combine Steps 1 and 2, the proposed method can be applied to both off-line and on-line training. The proposed method reduces the number of training data and, at the same time, speeds up training. Its usefulness is confirmed through computer simulation.
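
    The boundary-pairing idea of Step 2 can be sketched as follows; the rough Step-1 classifier, the error ranking, and the single round of nearest-opposite-class pairing are simplified stand-ins for the procedure in the paper.

        import numpy as np

        rng = np.random.default_rng(5)

        # Two-class toy data separated by a curved boundary.
        X = rng.uniform(-1, 1, (400, 2))
        y = (X[:, 1] > 0.3 * np.sin(3 * X[:, 0])).astype(int)

        def select_boundary_pairs(X, y, err, top=40):
            """Take the samples with the largest output error after Step 1's
            rough training and pair each with its nearest neighbor from the
            other class; both ends of a pair lie close to the class boundary."""
            hard = np.argsort(err)[-top:]          # relatively large error
            selected = set()
            for i in hard:
                other = np.where(y != y[i])[0]
                j = other[np.argmin(((X[other] - X[i]) ** 2).sum(axis=1))]
                selected.update((i, j))
            return sorted(selected)

        # Stand-in for Step 1: error of a deliberately rough linear classifier.
        rough_score = X[:, 1]                      # ignores the sine-shaped boundary
        err = np.abs(y - (rough_score > 0)) + 0.01 * rng.random(len(y))
        subset = select_boundary_pairs(X, y, err)
        print(f"selected {len(subset)} of {len(y)} samples for Step 2 training")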

  • Detection of Surging Sound with Wavelet Transform and Neural Networks

    Manabu KOTANI  Yasuo UEDA  Kenzo AKAZAWA  Toshihide KANAGAWA  

     
    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E81-D No:3  Page(s): 329-335

    An acoustic diagnosis technique for blowers based on the wavelet transform and neural networks is described. For this diagnosis, it is important to detect surging phenomena, which can lead to the destruction of the blower. The dyadic wavelet transform is used for pre-processing, and a multilayer neural network is used for discrimination. Experiments were performed on a blower, and the results show that the neural network combined with the wavelet transform can detect surging sound well.

  • A Comparative Study of Eight Learning Algorithms for Artificial Neural Networks Based on a Real Application

    Yadira SOLANO  Hiroaki IKEDA  

     
    LETTER-Neural Networks
    Vol: E81-A No:2  Page(s): 355-357

    The aim of this study is to offer additional experimental evaluation of learning algorithms for artificial neural networks by testing and comparing the normalized backpropagation algorithm (NBP), previously proposed by the authors, and seven other alternatives on a particular application to financial forecasting. The algorithms are the original backpropagation (OBP), the NBP, backpropagation with momentum (two versions), delta-bar-delta, superSAB, rprop, and quickprop.

  • A Tighter Upper Bound on Storage Capacity of Multilayer Networks

    Haruhisa TAKAHASHI  

     
    PAPER-Neural Networks
    Vol: E81-A No:2  Page(s): 333-339

    Typical concepts concerning the memorizing capability of multilayer neural networks are the statistical capacity and the Vapnik-Chervonenkis (VC) dimension; these are defined differently according to their intended applications. Although several tighter upper bounds on the VC dimension have been proposed in the literature, even when limited to networks with linear threshold elements, upper bounds on the statistical capacity are available only up to the order of magnitude. We first argue that the proposed, or ordinary, formulation of the upper bound on the statistical capacity depends strongly on, and thus can be expressed by, the number of first-hidden-layer units. We then describe a more elaborate upper bound on the memorizing capacity of multilayer neural networks with linear threshold elements, which improves on former results. Finally, a discussion of how to gain good generalization is presented.

  • Neural Network Models for Blind Separation of Time Delayed and Convolved Signals

    Andrzej CICHOCKI  Shun-ichi AMARI  Jianting CAO  

     
    PAPER
    Vol: E80-A No:9  Page(s): 1595-1603

    In this paper we develop a new family of on-line adaptive learning algorithms for the blind separation of time-delayed and convolved sources. The algorithms are derived for feedforward and fully connected feedback (recurrent) neural networks on the basis of a modified natural gradient approach. The proposed algorithms can be considered a generalization and extension of existing algorithms for instantaneous mixtures of unknown source signals. Preliminary computer simulations confirm the validity and high performance of the proposed algorithms.
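
    For the instantaneous case that these algorithms generalize, the natural gradient rule takes the well-known form W <- W + eta (I - f(y) y^T) W. The sketch below implements that baseline (the nonlinearity and sources are illustrative choices); the convolutive extension in the paper adds time-delayed terms.

        import numpy as np

        rng = np.random.default_rng(6)

        # Two independent super-Gaussian sources, instantaneously mixed.
        T = 5000
        s = rng.laplace(size=(2, T))
        A = rng.standard_normal((2, 2))
        x = A @ s

        W = np.eye(2)
        eta = 0.002
        f = np.tanh                           # odd nonlinearity (illustrative choice)

        # On-line natural gradient rule for the instantaneous mixing case:
        #   W <- W + eta * (I - f(y) y^T) W
        for t in range(T):
            y = W @ x[:, t]
            W += eta * (np.eye(2) - np.outer(f(y), y)) @ W

        print("global system W @ A (ideally a scaled permutation):")
        print(W @ A)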

  • Necessary and Sufficient Condition for Absolute Exponential Stability of a Class of Nonsymmetric Neural Networks

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    PAPER-Bio-Cybernetics and Neurocomputing
    Vol: E80-D No:8  Page(s): 802-807

    In this paper, we prove that for the class of nonsymmetric neural networks whose connection matrices T have nonnegative off-diagonal entries, the condition that -T is an M-matrix is necessary and sufficient for absolute exponential stability of any network in this class. While this result extends the existing absolute-stability result of Forti et al., the proof given in this paper is simpler and is completed by a different approach. The most significant consequence is that this class is the largest class of nonsymmetric neural networks that can be employed for embedding and solving optimization problems with a globally exponential rate of convergence to the optimal solution and without the risk of spurious responses. An illustrative numerical example is also given.
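
    The condition is straightforward to test numerically: since T has nonnegative off-diagonal entries, -T is a Z-matrix, and it is a (nonsingular) M-matrix when all its eigenvalues have positive real parts. An illustrative checker:

        import numpy as np

        def is_nonsingular_m_matrix(M, tol=1e-12):
            """M-matrix test (nonsingular case): M is a Z-matrix (off-diagonal
            entries <= 0) and every eigenvalue has a positive real part."""
            off = M - np.diag(np.diag(M))
            if (off > tol).any():
                return False                       # not a Z-matrix
            return bool((np.linalg.eigvals(M).real > tol).all())

        # Connection matrices with nonnegative off-diagonal entries (the class
        # considered in the paper); absolute exponential stability holds iff
        # -T is an M-matrix.
        T_stable   = np.array([[-3.0, 1.0], [0.5, -2.0]])
        T_unstable = np.array([[-1.0, 4.0], [4.0, -1.0]])
        print(is_nonsingular_m_matrix(-T_stable))     # True
        print(is_nonsingular_m_matrix(-T_unstable))   # False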

  • Absolute Exponential Stability of Neural Networks with Asymmetric Connection Matrices

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Neural Networks
    Vol: E80-A No:8  Page(s): 1531-1534

    In this letter, an absolute exponential stability result for neural networks with asymmetric connection matrices is obtained by a new proof approach, generalizing the existing result on the absolute stability of neural networks. It is demonstrated that the network time constant is inversely proportional to the global exponential rate at which the network trajectories converge to the unique equilibrium. A numerical simulation example is also given to illustrate the analysis.

  • Self-Learning Analog Neural Network LSI with High-Resolution Non-Volatile Analog Memory and a Partially-Serial Weight-Update Architecture

    Takashi MORIE  Osamu FUJITA  Kuniharu UCHIMURA  

     
    PAPER-Neural Networks and Chips
    Vol: E80-C No:7  Page(s): 990-995

    A self-learning analog neural network LSI with non-volatile analog memory that can be updated with more than 13-bit resolution has been designed, fabricated, and tested for the first time. The non-volatile memory is attained by a new floating-gate MOSFET device that has a charge-injection part and an accumulation part separated by a high resistance. We also propose a partially-serial weight-update architecture in which multiple synapse circuits share a common weight-update circuit to reduce the circuit area. A prototype chip, fabricated in a 1.3-µm double-poly CMOS process, includes 50 synapse elements and achieves a computational power of 10 MCPS; the weights can be updated at rates of up to 40 kHz. This chip can be used to implement backpropagation networks, deterministic Boltzmann machines, and Hopfield networks with Hebbian learning.

  • Detecting Lung Cancer Symptoms with Analogic CNN Algorithms Based on a Constrained Diffusion Template

    Satoshi HIRAKAWA  Csaba REKECZKY  Yoshifumi NISHIO  Akio USHIDA  Tamas ROSKA  Junji UENO  Ishtiaq KASEM  Hiromu NISHITANI  

     
    LETTER-Nonlinear Problems
    Vol: E80-A No:7  Page(s): 1340-1344

    In this article, a new type of diffusion template and an analogic CNN algorithm using it for detecting certain lung cancer symptoms in X-ray films are proposed. The performance of the diffusion template is investigated, and the CNN algorithm is verified to successfully detect some key lung cancer symptoms.

  • A Learning Algorithm for a Neural Network LSI with Restricted Integer Weights

    Tomohisa KIMURA  Takeshi SHIMA  

     
    PAPER-Neural Networks and Chips
    Vol: E80-C No:7  Page(s): 983-989

    A novel learning algorithm is proposed for neural network LSIs that have low-resolution synapse weights. Following a brief discussion of the synapse weight adaptation mechanism in the gradient descent scheme, we propose a way of relaxing the influence of weight discretization: restricting the number of synapses updated in each learning iteration. Simulation results support the effectiveness of this learning algorithm. Low-resolution synapses will be practical for realizing large-scale neural network LSIs.
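
    The restriction idea can be sketched as follows, assuming integer weight levels and a largest-gradient-first selection of the synapses to update; both are illustrative choices, not the paper's exact rule.

        import numpy as np

        rng = np.random.default_rng(7)

        LEVELS = np.arange(-7, 8)                 # restricted integer weight values
        W = rng.choice(LEVELS, size=(1, 4)).astype(float)

        def train_step(x, target, k=1, lr=1.0):
            """Compute the gradient for all synapses, but update only the k
            synapses with the largest gradient magnitude, stepping each by one
            integer level; this relaxes the influence of weight discretization."""
            global W
            y = np.tanh(W @ x)
            grad = ((y - target) * (1 - y ** 2))[:, None] * x[None, :]
            flat = np.abs(grad).ravel()
            for idx in np.argsort(flat)[-k:]:      # k largest-magnitude gradients
                i, j = divmod(idx, W.shape[1])
                W[i, j] = np.clip(W[i, j] - lr * np.sign(grad[i, j]),
                                  LEVELS.min(), LEVELS.max())

        # Toy task: learn the sign of the first input component.
        for step in range(500):
            x = rng.standard_normal(4)
            train_step(x, np.sign(x[0]), k=1)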

  • A Prediction Method of Non-Stationary Time Series Data by Using a Modular Structured Neural Network

    Eiji WATANABE  Noboru NAKASAKO  Yasuo MITANI  

     
    PAPER
    Vol: E80-A No:6  Page(s): 971-976

    This paper proposes a prediction method for non-stationary time series data with time-varying parameters. A modular structured neural network is introduced to capture the changing behavior of the time-varying parameters. This network is constructed as a hierarchical combination of neural networks that predict the time series data (NNT: Neural Network for Prediction of Time Series Data) and a neural network that predicts the weights (NNW: Neural Network for Prediction of Weights). Next, we propose a reasonable method for determining the length of the locally stationary section, using the additive learning ability of neural networks. Finally, the validity and effectiveness of the proposed method are confirmed through simulations and experiments on actual data.
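
    The hierarchical NNT/NNW combination can be sketched as a gating arrangement in which the NNW outputs the weights for combining the NNT predictions; the module sizes and softmax gate below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(8)

        N, K, H = 6, 3, 5          # past samples, NNT modules, hidden units (assumed)

        class MLP:
            """One-hidden-layer network used for both NNT and NNW modules."""
            def __init__(self, n_in, n_out):
                self.W1 = rng.standard_normal((H, n_in)) * 0.3
                self.W2 = rng.standard_normal((n_out, H)) * 0.3
            def __call__(self, x):
                return self.W2 @ np.tanh(self.W1 @ x)

        nnts = [MLP(N, 1) for _ in range(K)]   # NNTs: each predicts the next sample
        nnw = MLP(N, K)                        # NNW: predicts the mixing weights

        def predict(past):
            """Modular prediction: the NNW outputs weights over the NNT modules,
            and the final prediction is the weighted sum of the NNT outputs."""
            w = np.exp(nnw(past)); w /= w.sum()            # softmax gate
            return sum(wk * nnt(past)[0] for wk, nnt in zip(w, nnts))

        # Non-stationary series: the dominant frequency drifts with time.
        t = np.arange(400)
        x = np.sin(2 * np.pi * (0.02 + 0.0001 * t) * t)
        preds = [predict(x[n - N:n]) for n in range(N, len(x))]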
